68 research outputs found

    Energy landscape analysis of neuroimaging data

    Computational neuroscience models have been used to understand neural dynamics in the brain and how they may be altered when physiological or other conditions change. We review and develop a data-driven approach to neuroimaging data called energy landscape analysis. The methods are rooted in statistical physics, in particular the Ising model, also known as the (pairwise) maximum entropy model or Boltzmann machine. These methods have been applied to fitting electrophysiological data in neuroscience for a decade, but their use on neuroimaging data is still in its infancy. We first review the methods and discuss some algorithms and technical aspects. We then apply the methods to functional magnetic resonance imaging data recorded from healthy individuals to inspect the relationship between the accuracy of fitting, the size of the brain system to be analyzed, and the data length. (Comment: 22 pages, 4 figures, 1 table)
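    To make the approach concrete, here is a minimal, self-contained sketch (not the authors' exact pipeline) of fitting a pairwise maximum entropy (Ising) model to binarized activity by exact Boltzmann learning. It enumerates all 2^N states, so it is feasible only for small systems, which is precisely the regime in which the accuracy-versus-system-size question arises. The learning rate, iteration count, and toy data are illustrative assumptions.

```python
import numpy as np
from itertools import product

def fit_ising_exact(X, lr=0.2, n_iter=2000):
    """Boltzmann learning for a pairwise maximum entropy (Ising) model.

    X: (T, N) array of binarized activity in {-1, +1}. Enumerates all
    2**N states, so it is feasible only for small N.
    """
    T, N = X.shape
    m_data = X.mean(axis=0)                 # empirical means <s_i>
    C_data = X.T @ X / T                    # empirical correlations <s_i s_j>

    states = np.array(list(product([-1, 1], repeat=N)))  # all 2**N patterns
    h = np.zeros(N)                         # local fields
    J = np.zeros((N, N))                    # pairwise couplings

    for _ in range(n_iter):
        # Boltzmann distribution for E(s) = -h.s - (1/2) s.J.s
        E = -states @ h - 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(-(E - E.min()))          # shift for numerical stability
        p /= p.sum()
        m_model = p @ states                # model means
        C_model = states.T @ (states * p[:, None])  # model correlations
        # gradient ascent on the log-likelihood = moment matching
        h += lr * (m_data - m_model)
        dJ = lr * (C_data - C_model)
        np.fill_diagonal(dJ, 0.0)           # no self-couplings
        J += dJ
    return h, J

# toy usage: threshold random signals to {-1, +1} and fit a 6-node model
rng = np.random.default_rng(0)
X = np.where(rng.standard_normal((500, 6)) > 0, 1, -1)
h, J = fit_ising_exact(X)
```

    The fixed point of the update is exactly the maximum entropy condition: the model's first and second moments match the data's, after which the fitted energy function defines the landscape to be analyzed.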

    Balance network of asymmetric simple exclusion process

    We investigate a balance network of the asymmetric simple exclusion process (ASEP). Subsystems consisting of ASEPs are connected to each other by bidirectional links, which results in balance between every pair of subsystems. The network includes some important special cases discussed in earlier works, such as the ASEP with Langmuir kinetics, multiple lanes, and finite reservoirs. Probability distributions of particles in the steady state are given exactly in factorized forms according to their balance properties. Although the system has nonequilibrium parts, the expressions are well described within a framework of statistical mechanics based on equilibrium states. Moreover, the overall argument does not depend on the network structure, so the results obtained in this work are applicable to a broad range of problems.
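    For readers unfamiliar with the building block, the following is a minimal Monte Carlo sketch of a single open-boundary ASEP with Langmuir kinetics, one of the special cases the abstract mentions; it is a plain simulation, not the paper's factorized steady-state construction, and all rates are illustrative assumptions.

```python
import numpy as np

def simulate_asep_lk(L=200, alpha=0.3, beta=0.3, p=1.0,
                     wa=0.001, wd=0.001, steps=200_000, seed=1):
    """Random-sequential ASEP with open boundaries and Langmuir kinetics.

    alpha/beta: entry/exit rates at the boundaries; p: rightward hop rate;
    wa/wd: bulk attachment/detachment rates. Returns the density profile
    averaged over the second half of the run.
    """
    rng = np.random.default_rng(seed)
    tau = np.zeros(L, dtype=int)           # occupation numbers, 0 or 1
    profile = np.zeros(L)
    n_avg = 0
    for t in range(steps):
        i = rng.integers(-1, L)            # pick a move (-1 = entry)
        r = rng.random()
        if i == -1:                        # particle entry at site 0
            if tau[0] == 0 and r < alpha:
                tau[0] = 1
        elif i == L - 1:                   # exit from the last site
            if tau[-1] == 1 and r < beta:
                tau[-1] = 0
        elif tau[i] == 1 and tau[i + 1] == 0 and r < p:
            tau[i], tau[i + 1] = 0, 1      # hop one site to the right
        # Langmuir kinetics at an independently chosen bulk site
        j = rng.integers(L)
        r2 = rng.random()
        if tau[j] == 0 and r2 < wa:
            tau[j] = 1                     # attachment
        elif tau[j] == 1 and r2 < wd:
            tau[j] = 0                     # detachment
        if t > steps // 2:
            profile += tau
            n_avg += 1
    return profile / n_avg

rho = simulate_asep_lk()                   # steady-state density profile
```

    In the paper's framework, each such subsystem would be one node of the network, with bidirectional links chosen so that every pair of subsystems is in balance.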

    Reinforcement Learning Explains Conditional Cooperation and Its Moody Cousin

    Direct reciprocity, or repeated interaction, is a main mechanism for sustaining cooperation in social dilemmas involving two individuals. For larger groups and networks, which are probably more relevant to understanding and engineering our society, experiments employing repeated multiplayer social dilemma games have suggested that humans often show conditional cooperation behavior and its moody variant. The mechanisms underlying these behaviors remain largely unclear. Here we provide a proximate account of this behavior by showing that individuals adopting a type of reinforcement learning called aspiration learning phenomenologically behave as conditional cooperators. By definition, individuals are satisfied if and only if the obtained payoff is larger than a fixed aspiration level. They reinforce actions that have resulted in satisfactory outcomes and anti-reinforce those yielding unsatisfactory outcomes. The results obtained in the present study are general in that they explain extant experimental results for both so-called moody and non-moody conditional cooperation, for prisoner's dilemma and public goods games, and for well-mixed groups and networks. In contrast to previous theory, individuals are assumed to have no access to information about what other individuals are doing, so they cannot explicitly use conditional cooperation rules. In this sense, myopic aspiration learning, in which the unconditional propensity to cooperate is modulated in every discrete time step, explains the conditional behavior of humans. Aspiration learners showing (moody) conditional cooperation obeyed a noisy GRIM-like strategy. This is different from Pavlov, a reinforcement learning strategy that promotes mutual cooperation in two-player situations.
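    As a concrete illustration of the learning rule described above, here is a minimal sketch of two aspiration learners playing a repeated prisoner's dilemma; the aspiration level, learning rate, and payoff values are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def aspiration_learning_pd(A=1.5, R=3, S=0, T=5, P=1,
                           eps=0.1, rounds=10_000, seed=0):
    """Two aspiration learners in a repeated prisoner's dilemma.

    Each agent keeps an unconditional propensity to cooperate. After each
    round it reinforces its last action if the payoff exceeded the fixed
    aspiration level A, and anti-reinforces it otherwise (rate eps).
    Payoffs follow the usual PD ordering T > R > P > S.
    """
    rng = np.random.default_rng(seed)
    payoff = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
              ('D', 'C'): (T, S), ('D', 'D'): (P, P)}
    prob_c = [0.5, 0.5]                    # cooperation propensities
    coop_count = 0
    for _ in range(rounds):
        acts = ['C' if rng.random() < prob_c[k] else 'D' for k in range(2)]
        pays = payoff[(acts[0], acts[1])]
        for k in range(2):
            satisfied = pays[k] > A        # satisfied iff payoff beats aspiration
            took_c = acts[k] == 'C'
            # move propensity toward the last action if satisfied, away if not
            target = 1.0 if took_c == satisfied else 0.0
            prob_c[k] += eps * (target - prob_c[k])
        coop_count += acts.count('C')
    return coop_count / (2 * rounds)       # average cooperation rate

print(aspiration_learning_pd())
```

    Note that neither agent ever observes the other's action, only its own payoff; the conditional-cooperation pattern reported in experiments emerges from this payoff-driven update alone.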